15 research outputs found

    PICOZOOM: A context sensitive multimodal zooming interface

    This paper introduces a novel zooming interface deploying a pico projector that, instead of a second visual display, leverages audioscapes for contextual information. The technique enhances current flashlight-metaphor approaches, supporting flexible usage within the domain of spatial augmented reality to focus on object- or environment-related details. In a user study, we focused on quantifying the projection limitations related to the depiction of details through the pico projector and validated the interaction approach. The quantified results of the study correlate pixel density, detail, and proximity, which can greatly aid the design of more effective, legible zooming interfaces for pico projectors; the study can also serve as an example testbed for testing aberrations with other projectors. Furthermore, users rated the zooming technique using audioscapes well, showing the validity of the approach. These studies form the foundation for extending our work by detailing the audio-visual approach and looking more closely into the role of real-world features in interpreting projected content.
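The correlation between pixel density and proximity noted above can be illustrated with a back-of-the-envelope calculation (a minimal sketch, not the study's actual model): a projector's throw ratio relates distance to image width, so the projected image widens linearly with distance and pixel density falls off as 1/distance. The resolution and throw-ratio values below are hypothetical.

```python
def projected_pixel_density(h_res, throw_ratio, distance_m):
    """Horizontal pixels per metre of projected image width.

    Throw ratio = distance / image width, so image width grows
    linearly with distance and pixel density falls as 1 / distance.
    """
    image_width_m = distance_m / throw_ratio
    return h_res / image_width_m

# Hypothetical pico projector: 854 px wide, throw ratio 1.4.
# At 0.5 m the image is ~0.36 m wide (~2391 px/m); doubling the
# distance halves the pixel density, degrading projected detail.
```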

    Real-time simulation of camera errors and their effect on some basic robotic vision algorithms

    We present a real-time approximate simulation of some camera errors and the effects these errors have on some common computer vision algorithms for robots. The simulation uses a software framework for real-time post-processing of image data. We analyse the performance of some basic algorithms for robotic vision when adding modifications to images due to camera errors. The result of each algorithm/error combination is presented. This simulation is useful for tuning robotic algorithms to make them more robust to the imperfections of real cameras. © 2013 IEEE
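As a rough illustration of the kind of post-processing such a simulation performs (a minimal sketch, not the paper's framework; error models and parameters are assumptions), the following applies two common camera errors, additive Gaussian sensor noise and radial vignetting, to a grayscale image:

```python
import math
import random

def apply_camera_errors(img, noise_sigma=5.0, vignette_strength=0.4, seed=0):
    """Approximate two camera errors on a grayscale image (list of rows
    of 0-255 values): additive Gaussian sensor noise and radial
    vignetting (darkening that grows toward the corners)."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_r = math.hypot(cy, cx)
    out = []
    for y, row in enumerate(img):
        new_row = []
        for x, v in enumerate(row):
            r = math.hypot(y - cy, x - cx) / max_r   # 0 at centre, 1 at corner
            v = v * (1.0 - vignette_strength * r * r)  # simplified radial falloff
            v += rng.gauss(0.0, noise_sigma)           # sensor noise
            new_row.append(min(255, max(0, round(v))))
        out.append(new_row)
    return out
```

Running a feature detector or edge extractor on the degraded output versus the clean input is one way to gauge an algorithm's robustness to such imperfections.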

    An analysis of eye-tracking data in foveated ray tracing

    We present an analysis of eye-tracking data produced during a quality-focused user study of our own foveated ray tracing method. Generally, foveated rendering serves the purpose of adapting the actual rendering method to a user's gaze. This leads to performance improvements that also allow computationally expensive methods such as ray tracing to be used in fields like virtual reality (VR), where high rendering performance is important to achieve immersion, or in scientific and information visualization, where large amounts of data may hinder real-time rendering. We provide an overview of our rendering system as well as information about the data we collected during the user study, which was based on fixation tasks to be fulfilled during flights through virtual scenes displayed on a head-mounted display (HMD). We analyze the tracking data regarding its precision and take a closer look at the accuracy achieved by participants when focusing on the fixation targets. This information is then put into context with the quality ratings given by the users, leading to a surprising relation between fixation accuracy and quality ratings. We would like to thank NVIDIA for providing two Quadro K6000 graphics cards for the user study, the Intel Visual Computing Institute and the European Union (EU) for co-funding as part of the Dreamspace project, the German Federal Ministry for Economic Affairs and Energy (BMWi) for funding the MATEDIS ZIM project (grant no. KF2644109), and the Federal Ministry of Education and Research (BMBF) for funding the project OLIVE.
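Fixation accuracy in such studies is typically expressed in degrees of visual angle rather than in screen units. A minimal sketch of that conversion (assuming gaze and target positions in metres on the display plane and a known viewing distance; this is not the authors' analysis code):

```python
import math

def fixation_error_deg(gaze_xy, target_xy, viewing_distance_m):
    """Angular error (degrees of visual angle) between a gaze sample
    and a fixation target, both given in metres on the display plane."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    offset = math.hypot(dx, dy)
    return math.degrees(math.atan2(offset, viewing_distance_m))

# At a 0.6 m viewing distance, a ~1.05 cm offset on the display
# corresponds to roughly one degree of visual angle.
```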

    Focus-plus-context techniques for picoprojection-based interaction

    In this paper, we report on novel zooming interface methods that deploy a small handheld projector. Using mobile projections to visualize object/environment-related information on real objects introduces new aspects for zooming interfaces. Different approaches are investigated that focus on maintaining a level of context while exploring detailed information. In doing so, we propose methods that provide alternative contextual cues within a single projector and exploit the potential of zoom lenses to support a multilevel zooming approach. Furthermore, we look into the correlation between pixel density, distance to the target, and projection size. Alongside these techniques, we report on multiple user studies, in which we quantified the projection limitations and validated various interactive visualization approaches. Thereby, we focused on solving issues related to pixel density, brightness, and contrast that affect the design of more effective, legible zooming interfaces for handheld projectors.
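A classic way to maintain context while exploring detail within a single display, in the spirit of the focus-plus-context techniques discussed here, is a fisheye distortion: space near the focus is magnified while the rest is compressed but kept visible. The sketch below follows the standard 1-D fisheye transform (Sarkar–Brown style), not necessarily the paper's own method:

```python
def fisheye(x, focus, d=3.0):
    """1-D fisheye transform on [0, 1]: positions near `focus` get
    magnified (more output space), the remainder is compressed but
    stays visible as context.  `d` is the distortion factor; d = 0
    gives the identity mapping."""
    def g(t):
        # magnification profile on [0, 1]: g(0) = 0, g(1) = 1
        return (d + 1) * t / (d * t + 1)
    if x >= focus:
        span = 1.0 - focus
        if span == 0.0:
            return x
        return focus + g((x - focus) / span) * span
    return focus - g((focus - x) / focus) * focus
```

The endpoints and the focus itself stay fixed, so the whole domain remains on screen; only the allocation of display space changes.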

    Hash-based hierarchical caching and layered filtering for interactive previews in global illumination rendering

    Copyright © 2020 by the authors. Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground-truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity, allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering. Funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) through the MoVISO ZIM project under Grant No. ZF4120902.
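A linkless octree replaces child pointers with hashed cell addresses, which is what enables the constant-time cache lookups mentioned above. A toy sketch of the idea (class and parameter names are hypothetical, and a Python dict stands in for the GPU-side hash table):

```python
def octree_key(level, x, y, z):
    """Pack an octree cell address into one integer key.  In a linkless
    octree, nodes are found by hashing their address instead of
    following child pointers, giving constant-time lookups."""
    return (level << 48) | (x << 32) | (y << 16) | z

class IlluminationCache:
    """Toy linkless-octree cache: maps hit points in [0,1)^3 to averaged
    diffuse radiance, stored per cell at a chosen subdivision level."""
    def __init__(self, level=4):
        self.level = level
        self.cells = {}               # hashed address -> (sum, count)

    def _addr(self, p):
        n = 1 << self.level           # cells per axis at this level
        return octree_key(self.level, *(min(int(c * n), n - 1) for c in p))

    def store(self, p, radiance):
        key = self._addr(p)
        s, c = self.cells.get(key, (0.0, 0))
        self.cells[key] = (s + radiance, c + 1)

    def lookup(self, p):
        s, c = self.cells.get(self._addr(p), (0.0, 0))
        return s / c if c else None   # None on a cache miss
```

A real implementation would keep several levels and fall back to coarser cells on a miss; the single-level dict here only illustrates the hashed-address lookup.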

    Boosting histogram-based denoising methods with GPU optimizations

    We present a system which allows for guiding the image quality in global illumination (GI) methods by user-specified regions of interest (ROIs). This is done with either a tracked interaction device or a mouse-based method, making it possible to create a visualization with varying convergence rates throughout one image towards a GI solution. To achieve this, we introduce a scheduling approach based on Sparse Matrix Compression (SMC) for efficient generation and distribution of rendering tasks on the GPU that allows for altering the sampling density over the image plane. Moreover, we present a prototypical approach for filtering the new, possibly sparse samples to a final image. Finally, we show how large-scale display systems can benefit from rendering with ROIs.

    Guided high-quality rendering

    © Springer International Publishing Switzerland 2015. We present a system which allows for guiding the image quality in global illumination (GI) methods by user-specified regions of interest (ROIs). This is done with either a tracked interaction device or a mouse-based method, making it possible to create a visualization with varying convergence rates throughout one image towards a GI solution. To achieve this, we introduce a scheduling approach based on Sparse Matrix Compression (SMC) for efficient generation and distribution of rendering tasks on the GPU that allows for altering the sampling density over the image plane. Moreover, we present a prototypical approach for filtering the new, possibly sparse samples to a final image. Finally, we show how large-scale display systems can benefit from rendering with ROIs.
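The scheduling idea, compressing a non-uniform per-pixel sampling density so rendering tasks can be generated and distributed efficiently, can be sketched as run-length compression of an ROI-boosted sample map. This is a simplified CPU stand-in for the paper's GPU-side Sparse Matrix Compression, with hypothetical sample counts:

```python
def schedule_samples(width, height, rois, base=1, boost=8):
    """Build a compressed task list: every pixel gets `base` samples,
    pixels inside any ROI (x0, y0, x1, y1) get `boost`.  Runs of equal
    sample counts are compressed to (start_index, run_length, samples)
    triples, in the spirit of sparse-matrix compression of the
    sampling density over the image plane."""
    counts = []
    for y in range(height):
        for x in range(width):
            inside = any(x0 <= x < x1 and y0 <= y < y1
                         for x0, y0, x1, y1 in rois)
            counts.append(boost if inside else base)
    runs, start = [], 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[start]:
            runs.append((start, i - start, counts[start]))
            start = i
    return runs
```

Each compressed run can then be handed to the GPU as one task, so work scales with the number of runs rather than with the number of pixels.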

    CAVE: a high-end concept for audio-visual spatial human-computer interaction

    No full text
    The CAVE concept for room-oriented human-computer interaction, introduced in the early 1990s, is finding increasing adoption despite the considerable technical effort it involves. This article gives an introduction to CAVE technology and discusses existing installations and applications. Furthermore, it places the CAVE in the context of other techniques for audio-visual spatial interaction.